Proximal ADMM for nonconvex and nonsmooth optimization
Authors
Abstract
By enabling the nodes or agents to solve small-sized subproblems to achieve coordination, distributed algorithms are favored by many networked systems for efficient and scalable computation. While substantial distributed algorithms are available for convex problems, results for the more broad nonconvex counterparts are extremely lacking. This paper develops a distributed algorithm for a class of nonconvex and nonsmooth problems featured by (i) an objective formed by both separate and composite components regarding the decision variables of interconnected agents, (ii) local bounded constraints, and (iii) coupled linear constraints. This problem is directly originated from smart buildings and also arises in other domains. To provide a distributed algorithm with a convergence guarantee, we revise the powerful alternating direction method of multipliers (ADMM) and propose a proximal ADMM. Specifically, noting that the main difficulty in establishing the convergence of ADMM for nonconvex optimization is the need to assume boundedness of the dual updates, we propose to update the dual variables in a discounted manner. This leads to the establishment of a so-called sufficiently decreasing and lower bounded Lyapunov function, which is critical for convergence. We prove that the proposed method converges to some approximate stationary points. We besides showcase the efficacy and performance of the method by a numerical example and a concrete application to multi-zone heating, ventilation, and air-conditioning (HVAC) control in smart buildings.
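The discounted dual update described in the abstract can be illustrated on a toy problem. The sketch below is not the authors' algorithm: all problem data, step sizes, and the discount factor are hypothetical choices, and the problem is a single-variable composite problem rather than a multi-agent one, chosen only to show how the `(1 - delta)` discount enters the dual step.

```python
# Toy sketch of proximal ADMM with a *discounted* dual update (all values
# hypothetical). Problem: minimize 0.5*(x - a)^2 + mu*|z|  s.t.  x - z = 0.
a, mu = 2.0, 0.5                   # smooth target and l1 weight (toy data)
rho, tau, delta = 1.0, 0.1, 0.01   # penalty, proximal weight, dual discount

def soft(v, t):
    """Soft-thresholding: proximal operator of t*|.|."""
    return max(v - t, 0.0) - max(-v - t, 0.0)

x = z = lam = 0.0
for _ in range(2000):
    # x-update: proximal minimization of the augmented Lagrangian in x
    x = (a - lam + rho * z + tau * x) / (1.0 + rho + tau)
    # z-update: soft-thresholding handles the nonsmooth |z| term
    z = soft(x + lam / rho, mu / rho)
    # discounted dual update: the (1 - delta) factor keeps the dual
    # iterates bounded, which is the key to the Lyapunov argument
    lam = (1.0 - delta) * lam + rho * (x - z)

print(x, z, lam)  # x and z settle near 1.5 with a small residual x - z
```

Because of the discount, the iteration settles at an approximate stationary point: the residual `x - z` is driven to roughly `delta * lam / rho` rather than exactly zero, matching the abstract's notion of convergence to approximate stationary points.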
Similar resources
Global Convergence of ADMM in Nonconvex Nonsmooth Optimization
In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, φ(x0, . . . , xp, y), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables x0, . . . , xp, y, followed by updating the dual variable. We separate the variable y from xi’s as it has a spe...
Stochastic ADMM for Nonsmooth Optimization
Alternating Direction Method of Multipliers (ADMM) gained lots of attention due to Large-Scale Machine Learning demands. • Classic (70's) and flexible, Survey paper: (Boyd 2009) • Applications: compressed sensing (Yang & Zhang, 2011), image restoration (Goldstein & Osher, 2009), video processing and matrix completion (Goldfarb et al., 2010) • Recent variations: Linearized (Goldfarb et al., 2010;...
Asynchronous Stochastic Proximal Methods for Nonconvex Nonsmooth Optimization
We study stochastic algorithms for solving non-convex optimization problems with a convex yet possibly non-smooth regularizer, which find wide applications in many practical machine learning applications. However, compared to asynchronous parallel stochastic gradient descent (AsynSGD), an algorithm targeting smooth optimization, the understanding of the behavior of stochastic algorithms for the n...
Perturbed Proximal Primal Dual Algorithm for Nonconvex Nonsmooth Optimization
In this paper we propose a perturbed proximal primal dual algorithm (PProx-PDA) for an important class of optimization problems whose objective is the sum of smooth (possibly nonconvex) and convex (possibly nonsmooth) functions subject to a linear equality constraint. This family of problems has applications in a number of statistical and engineering applications, for example in high-dimensiona...
A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
Journal
Journal title: Automatica
Year: 2022
ISSN: 1873-2836, 0005-1098
DOI: https://doi.org/10.1016/j.automatica.2022.110551